These Understanding AI resources offer explanations and insights into the core concepts of AI, including machine learning, deep learning, automation, and generative AI. Explore them to gain a clear understanding of AI and to see how these powerful technologies foster innovation and assist with everyday work duties and responsibilities.
Definition: Artificial intelligence (AI) is a technology, or set of tools and techniques, that enables computers to analyze data, recognize patterns, and make predictions or decisions without being explicitly programmed for each specific task. AI allows computers to mimic human intelligence in certain ways, giving your computer a bit of "brainpower" to help you with work tasks like organizing documents, analyzing data, or writing emails.
Clarifying Metaphor: AI is like a toolbox. Each tool in the AI toolbox is designed to help with a different task, whether it's analyzing data, recognizing patterns, or making predictions. Just as you might use different tools from a physical toolbox for various household tasks, AI tools can be applied to a wide range of challenges in the digital world. They're like trusty assistants, ready to lend a hand and make your work easier and more efficient.
Definition: Automation refers to the use of technology to perform tasks or processes with minimal human intervention. It involves the creation and deployment of systems, software, or machinery that can execute repetitive or routine tasks automatically. Automation aims to streamline workflows, increase efficiency, and reduce errors by replacing manual labor with automated processes. Automation and AI technologies can complement each other, with AI often powering the intelligent capabilities of automated systems to achieve greater efficiency and effectiveness.
Clarifying Metaphor: Imagine you're leading a small business, and you have a group of employees to whom you provide detailed instructions on how to carry out day-to-day tasks. These employees are also equipped with additional capabilities that allow them to analyze data, recognize patterns, and make decisions. As they work alongside you, they observe your preferences and habits, gradually learning to anticipate your needs. Over time, they become more efficient and effective, taking on more responsibilities and freeing up your time to focus on higher-level tasks. These employees are like AI-enhanced automation in that they continuously improve and adapt on their own, making them great at streamlining operations and increasing productivity.
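To make the idea concrete, here is a minimal sketch in Python of plain, rule-based automation (no AI involved): a short script that files documents into subfolders by type so nobody has to do it by hand. The folder name is hypothetical and stands in for any folder you might want to tidy.

```python
# Simple rule-based automation: sort files into subfolders by their file type.
# "Downloads_to_sort" is a hypothetical folder name used only for illustration.
from pathlib import Path
import shutil

def sort_files_by_type(folder: str) -> None:
    source = Path(folder)
    for item in source.iterdir():
        if item.is_file() and item.suffix:
            # e.g. "report.pdf" is moved into a "pdf" subfolder
            destination = source / item.suffix.lstrip(".").lower()
            destination.mkdir(exist_ok=True)
            shutil.move(str(item), str(destination / item.name))

sort_files_by_type("Downloads_to_sort")
```

Notice there is nothing intelligent here: the script simply follows fixed rules. AI enters the picture when a system can also learn and adapt, as in the definitions that follow.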
Definition: An algorithm is a set of precise instructions or rules that a computer follows to solve a problem or perform a task. It’s like a recipe that tells the computer exactly what steps to take, in what order, and under what conditions, to achieve a desired outcome. Algorithms are fundamental to computer programming and are used in various applications, from sorting data to processing information and making decisions.
Clarifying Metaphor: An algorithm is like a recipe for a computer to follow. Just as a recipe offers precise instructions for cooking a dish, an algorithm outlines the exact steps that a computer needs to follow to solve a problem or accomplish a task.
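As a concrete illustration, here is a tiny algorithm written out in Python (a made-up example, not tied to any particular system): the step-by-step "recipe" a computer follows to find the highest score in a list.

```python
# A tiny algorithm: precise, ordered steps for finding the highest score in a list.
def highest_score(scores):
    highest = scores[0]           # step 1: start with the first score
    for score in scores[1:]:      # step 2: look at each remaining score in order
        if score > highest:       # step 3: if it beats the current best...
            highest = score       #         ...remember it instead
    return highest                # step 4: report the result

print(highest_score([72, 95, 88, 61]))  # prints 95
```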
Definition: A neural network is a computer system inspired by the structure and function of the human brain, consisting of interconnected nodes (or neurons) arranged in layers. These networks are designed to process information, recognize patterns, and make decisions by passing data through multiple layers of interconnected nodes. Each node performs a simple calculation based on the input it receives and passes its output to other nodes in the network. By adjusting the connections between nodes and the strength of their interactions, neural networks can learn from data and adapt their behavior to perform specific tasks, such as image recognition (e.g. the Amazon Lens feature in the Amazon app) or language translation.
Clarifying Metaphor: A neural network is like the coaching staff of an NFL team, with each coach specializing in different aspects of the game. Picture a team of offensive and defensive coordinators, position coaches, and strategists, all working together to analyze the opponent’s strengths and weaknesses, devise game plans, and make split-second decisions during a match. Each coach contributes their expertise to the team’s overall strategy, sharing their insights and strategies with other coaches on the staff. Similarly, a neural network consists of interconnected nodes (or neurons), with each node specializing in processing specific types of information.
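For readers who like to see the moving parts, here is a deliberately miniature sketch of the idea in Python. The weights below are made up for illustration; a real neural network has far more nodes and learns its weights from data rather than having them written in by hand.

```python
# A miniature "neural network": each node multiplies its inputs by weights,
# adds them up, and passes the result on to the next layer of nodes.
def node(inputs, weights):
    total = sum(i * w for i, w in zip(inputs, weights))
    return max(0.0, total)  # a simple activation: pass positive signals, block negative ones

inputs = [0.5, 0.8]                    # incoming data
hidden = [node(inputs, [0.9, -0.2]),   # first layer: two nodes, each with its own made-up weights
          node(inputs, [0.3, 0.7])]
output = node(hidden, [1.0, 0.5])      # second layer combines the hidden nodes' outputs
print(output)                          # the network's (untrained) decision value
```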
Definition: Machine learning is a specific type of artificial intelligence (AI) that focuses on teaching computers how to learn from data and improve their performance over time, doing so without being explicitly programmed. It is like training a computer to recognize patterns and make decisions based on examples rather than rigid instructions.
Clarifying Metaphor: Machine learning is like having an amazing apprentice who learns from examples and experience to become better at specific tasks over time, improving without needing constant supervision. Machine learning and this apprentice are similar in that the more challenges they tackle, the more proficient they become.
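Here is a minimal sketch of the idea in Python, using an invented spam-filtering example. Instead of writing rules for what counts as spam, the program "learns" from a handful of labeled examples; real machine learning systems do the same thing with far more data and far more sophisticated math.

```python
# Learning from examples rather than rigid rules: a toy spam filter.
# The example messages and labels below are invented for illustration.
examples = [
    ("win a free prize now", "spam"),
    ("limited time offer click here", "spam"),
    ("meeting moved to 3pm", "not spam"),
    ("lunch tomorrow in the student union", "not spam"),
]

def predict(message):
    # Score a new message by how many words it shares with each category's examples.
    scores = {"spam": 0, "not spam": 0}
    for text, label in examples:
        scores[label] += len(set(message.lower().split()) & set(text.split()))
    return max(scores, key=scores.get)

print(predict("claim your free prize"))   # prints "spam"
print(predict("where is the meeting"))    # prints "not spam"
```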
For those looking to dive a little deeper into machine learning, here is a great overview of the different approaches:
Definition: Deep learning is a subset of machine learning that focuses on training artificial neural networks to recognize patterns and make predictions by simulating the structure and function of the human brain. It involves processing large amounts of data through multiple layers of interconnected nodes (units that receive inputs, perform computations, and pass the results to other nodes in a network) to automatically extract information and learn how it is organized and represented. Deep learning algorithms excel at tasks such as image and speech recognition, natural language processing (NLP), and autonomous decision-making, making them powerful tools for solving sophisticated problems in various domains.
Clarifying Metaphor: Deep learning is like a great chess player analyzing a game, studying the board, and identifying moves and piece positions. As they delve deeper into the game, they start recognizing complex strategies, predicting their opponent’s future moves, and visualizing potential outcomes several moves ahead. In a similar way, deep learning algorithms process layers of information in a chess game, gradually identifying intricate patterns and the best strategies to make smart decisions and improve performance over time.
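For those comfortable with a bit of code, here is a minimal sketch of what "multiple layers" looks like in practice, assuming the PyTorch library is installed. The layer sizes are arbitrary and chosen only for illustration, and this model has not been trained on anything.

```python
# A small "deep" network: data flows through several layers of nodes in sequence.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(10, 32),  # layer 1: 10 input values feed 32 nodes
    nn.ReLU(),          # activation between layers
    nn.Linear(32, 16),  # layer 2: combines those signals into higher-level patterns
    nn.ReLU(),
    nn.Linear(16, 1),   # output layer: a single prediction
)

example_input = torch.rand(1, 10)  # one made-up data point with 10 features
print(model(example_input))        # an untrained prediction; training would adjust the weights
```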
Definition: Generative AI is a type of artificial intelligence (AI) that focuses on creating new content, such as text and images, that is similar to what humans produce. It involves training algorithms to understand and mimic patterns in existing data so they can generate new, original content based on new combinations of the data they have been given. Generative AI can be used for a variety of purposes, including creative expression, content creation, and transforming data to help learn new things.
Clarifying Metaphor: Generative AI is like a chef who can invent countless variations of a signature dish, taking a classic recipe and transforming it into a whole array of innovative dishes, each with its own unique twist. In a similar way, generative AI algorithms analyze patterns in existing data (such as recipes) and use that knowledge to produce entirely new creations that retain the essence of the originals while introducing novel elements and flavors.
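As a drastically simplified stand-in for what generative models do, the Python sketch below "learns" which word tends to follow which in a short piece of text and then generates a new sentence from those patterns. Real generative AI works at an enormously larger scale, but the core idea of producing new combinations of learned patterns is the same. The training sentence and the random seed are invented for illustration.

```python
# A toy "generative" model: learn word-to-next-word patterns, then remix them.
import random

training_text = "the chef adds spice the chef tastes the dish the dish needs spice"

# Learn the patterns: which words follow each word?
follows = {}
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows.setdefault(current, []).append(nxt)

# Generate something new from those patterns.
random.seed(0)  # fixed seed so the sketch is repeatable
word = "the"
sentence = [word]
for _ in range(6):
    word = random.choice(follows.get(word, ["spice"]))
    sentence.append(word)
print(" ".join(sentence))  # a new sentence in the style of the original
```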
Definition: Natural Language Processing (NLP) is a branch of artificial intelligence focused on enabling computers to understand, interpret, and generate human language in a way that is both meaningful and contextually relevant (i.e. it fits the situation). It involves developing algorithms and techniques to analyze and extract information from large amounts of natural language data, such as text or speech. NLP allows computers to perform tasks such as language translation, sentiment analysis, text summarization, and speech recognition, making it possible for humans and machines to communicate more effectively and efficiently.
Clarifying Metaphor: Natural language processing is like having a skilled interpreter who can understand and translate between different languages effortlessly. Imagine you’re in a room with people speaking different languages, and the interpreter listens to each person, understands their words, and translates them into a language that everyone else can understand.
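Here is a minimal sketch of one common NLP task, sentiment analysis, in plain Python. The tiny word lists are hand-made for illustration; real NLP systems learn these associations from enormous amounts of text rather than from a short dictionary.

```python
# A toy sentiment analyzer: count positive and negative words in a message.
positive = {"great", "helpful", "fast", "love"}
negative = {"slow", "confusing", "broken", "frustrating"}

def sentiment(text: str) -> str:
    words = text.lower().replace(".", "").replace("!", "").split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The new portal is fast and helpful!"))    # prints "positive"
print(sentiment("The login page is slow and confusing."))  # prints "negative"
```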
Definition: A large language model (LLM) is a type of AI model that has been trained on vast amounts of text data to understand and generate human-like language. These models use deep learning techniques to process and analyze text, allowing them to perform tasks such as language translation, text summarization, question answering, and text generation. Large language models are characterized by their extensive size, which enables them to capture complex linguistic patterns and generate coherent and contextually relevant text.
Clarifying Metaphor: An LLM is like a vast library filled with books from every corner of the world that hold a wealth of knowledge and information on a wide range of topics. These books are meticulously organized, allowing you to find answers to your questions or explore new ideas simply by flipping through the pages. A large language model is like a digital library of text data, comprising millions of documents, articles, and books from diverse sources. It is like having access to a vast repository of human knowledge and language, enabling the model to understand and generate text with remarkable depth and accuracy.
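The sketch below illustrates the core idea behind an LLM, predicting a likely next word given the words so far, using a handful of invented probabilities. A real LLM derives these probabilities from billions of examples and considers far more context than the single previous word used here.

```python
# A toy next-word predictor: the (invented) probabilities stand in for what an
# LLM learns from vast amounts of text.
next_word_probabilities = {
    "once": {"upon": 0.9, "again": 0.1},
    "upon": {"a": 0.95, "the": 0.05},
    "a":    {"time": 0.6, "hill": 0.4},
}

def predict_next(word):
    options = next_word_probabilities.get(word, {})
    return max(options, key=options.get) if options else "(no prediction)"

prompt = ["once"]
for _ in range(3):
    prompt.append(predict_next(prompt[-1]))
print(" ".join(prompt))  # prints "once upon a time"
```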
Definition: Prompt engineering refers to the process of designing and refining instructions (or prompts) given to large language models to produce specific responses or behaviors. It involves crafting precise and carefully worded instructions that guide the LLM towards generating outputs that align with the user’s intentions or objectives. The goal of prompt engineering is to use the capabilities of LLMs to produce tailored and high-quality responses for a variety of applications, including text generation, question answering, and content creation.
Clarifying Metaphor: Prompt engineering is like crafting a lesson plan. Just as a teacher carefully designs lesson plans to guide students toward specific learning outcomes, prompt engineers craft precise instructions to guide LLMs toward producing (or generating) desired responses. By adjusting the lesson content and activities (prompts), the teacher can influence the students’ understanding and performance, much like prompt engineers shape the output of LLMs to align with their objectives.
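To see the difference prompt engineering makes, compare the two prompts below. The send_to_llm() function is hypothetical, standing in for whichever chat tool or API you use, and the announcement details are made up; the point is how much more guidance the engineered prompt gives the model.

```python
# The same request, worded two ways. The engineered prompt specifies role,
# audience, content details, tone, length, and a call to action.
vague_prompt = "Write something about our tutoring program."

engineered_prompt = (
    "You are a university communications writer. Write a three-sentence email "
    "to first-year students announcing free math tutoring in the library, "
    "Monday through Thursday from 4 to 7 p.m. Use a friendly, encouraging tone "
    "and end with a call to action to book a session online."
)

# send_to_llm(vague_prompt)       # hypothetical call; tends to produce generic, unfocused text
# send_to_llm(engineered_prompt)  # hypothetical call; the extra detail shapes the output
```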
Artificial intelligence is the umbrella under which concepts like machine learning, deep learning, and generative AI stand. Algorithms are the building blocks of AI, serving as instructions given to computers so they can do things like analyze information, recognize patterns, and make predictions.
Machine learning, through algorithms and models, involves learning from data without explicit programming (i.e. the computer does not need a personal trainer to do cool computer things), while deep learning involves training deep neural networks with multiple layers of connections to handle complex data and tasks, much like the human brain does. Generative AI builds on what machine and deep learning do but uses it to create new outputs (not just things like predictions), taking the information it consumes, connecting the dots, and producing new combinations of those dots.
Natural language processing (NLP) and large language models (LLMs) play roles here too, as they enable computers to understand and interpret massive amounts of data (known as training data) and then generate human language. These outputs are shaped through prompt engineering, which involves designing and refining the instructions given to LLMs to produce specific responses or behaviors.
Most of this stuff has been around for a long time. But things have reached a fever pitch recently (well, at least since OpenAI launched ChatGPT, built on GPT-3.5, in November 2022) for a few, relatively simple reasons:
Technological advancements: Advancements in computing power, data availability, and algorithmic innovations have propelled AI from theory to practice. These advancements have made it possible to process and analyze vast amounts of data at speeds and scales previously unimaginable, enabling more complex and unique AI uses (and making tools like ChatGPT available to the public).
AI is everywhere and available to more people: AI is no longer confined to research labs or specialty sectors like robotics. It is everywhere, from smartphone cameras that can recognize faces to recommendation systems that suggest what standup comedy special to watch next on Netflix. This ubiquity transforms everyday experiences, making AI’s influence both profound and widespread.
Advances in natural language understanding and generation: The emergence of large language models and natural language processing technologies has revolutionized how machines understand and generate human language. This has opened new possibilities for human-computer interaction, making technology more user-friendly and enabling new forms of creativity and analysis. If your insurance company’s chatbot seems more “human” these days, it’s probably related to these upgrades.
Generative AI and creativity: The advent of generative AI models has blurred the lines between human and machine creativity. These models (e.g. Midjourney, DALL·E) can produce art, music, text, and other creative outputs, challenging our notions of authorship and creativity. This unique moment in AI demonstrates the potential for machines not just to replicate but to innovate, offering tools that can augment human creativity.
Economic and social impacts: AI is reshaping industries, creating new markets, and transforming jobs. It offers significant opportunities but also poses challenges, including potential job displacement and ethical considerations around privacy, surveillance, and bias. As we try to wrap our heads around what AI can do and how it will impact our lives and work, these conversations will continue to be a big part of decisions made in personal, professional, and public spheres.
Researchers have classified AI into seven categories, but only three have been realized thus far. In this video, IBM’s Martin Keen lays them out, from the narrow AI we know and enjoy today to the other extreme, super AI.
What really is the difference between artificial intelligence (AI) and machine learning (ML)? Are they actually the same thing? In this video, IBM’s Jeff Crume explains the differences and relationship between AI and ML, how related topics like deep learning (DL) fit in, and the types and properties of each.
Get a unique perspective on what the difference is between Machine Learning and Deep Learning, explained and illustrated by IBM’s Martin Keen.
Neural networks reflect the behavior of the human brain, allowing computer programs to recognize patterns and solve common problems in the fields of AI, machine learning, and deep learning. IBM’s Martin Keen explains neural networks and how they function in artificial intelligence.
Generative AI has stunned the world with its ability to create realistic images, code, and dialogue. Here, IBM’s Kate Soule explains how a popular form of generative AI, large language models, works.
Large language models (LLMs) are AI models, typically built on the generative pretrained transformer (GPT) architecture, that can create human-like text and code. In this video, IBM’s Martin Keen explains what an LLM is, how LLMs relate to foundation models, and then covers how they work and how they can be used.
Prompt tuning is an efficient, low-cost way of adapting an AI foundation model to new downstream tasks without retraining the model and updating its weights. In this video, IBM’s Martin Keen discusses three options for tailoring a pre-trained LLM for specialized tasks: fine-tuning, prompt engineering, and prompt tuning.
In this video, IBM’s Suj Perepa explains the differences among, and the value of, four prompting techniques: Retrieval Augmented Generation (RAG), Chain-of-Thought (CoT), ReAct (Reason + Act), and Directional Stimulus Prompting (DSP), providing an example of each and showing how they can best be used and even combined.